Should trained judges be a random or fixed ANOVA effect?
There are two major “schools of thought” on this. One school advocates a formal, classical approach and the other a more pragmatic approach.
The formal position is that an ANOVA model effect is random only when its levels represent a random sample from some population. If the levels of a factor are not a random sample (such as a deliberately picked “control” and “prototype”), then the effect is considered “fixed”. The formal school argues that a fixed set of (trained) judges cannot be considered a random sample from a population of judges, and hence the judge term should be fixed.
The pragmatic position is that it does not really matter whether a random sample has actually been drawn. All that matters is the scope of the inference the investigator wishes to make. If the investigator wishes to infer that any similarly trained set of judges would also find the observed difference between the samples, then the judge effect should be treated as random.
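As a concrete illustration of how the choice changes the test, here is a minimal Python/statsmodels sketch for a replicated profiling data set. The file name and the column names (score, product, judge) are assumptions for the example, not something taken from the articles below.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# One row per judge x product x replicate observation (assumed layout).
df = pd.read_csv("profiling_data.csv")

# Full two-way model with the product-by-judge interaction.
fit = smf.ols("score ~ C(product) * C(judge)", data=df).fit()
aov = sm.stats.anova_lm(fit, typ=2)
ms = aov["sum_sq"] / aov["df"]          # mean squares

# Judges fixed: the product effect is tested against the residual
# mean square (the default F-test in the ANOVA table).
f_fixed = ms["C(product)"] / ms["Residual"]
p_fixed = stats.f.sf(f_fixed, aov.loc["C(product)", "df"],
                     aov.loc["Residual", "df"])

# Judges random: the product effect is tested against the
# product-by-judge interaction mean square, with fewer denominator df.
f_random = ms["C(product)"] / ms["C(product):C(judge)"]
p_random = stats.f.sf(f_random, aov.loc["C(product)", "df"],
                      aov.loc["C(product):C(judge)", "df"])

print(f"judges fixed : F = {f_fixed:.2f}, p = {p_fixed:.4f}")
print(f"judges random: F = {f_random:.2f}, p = {p_random:.4f}")
```

The same data give two different F-ratios; which one is appropriate depends on which of the two positions above you take.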
It is also sometimes argued that treating the judge effect as random is more “conservative” in the sense that it sets a higher hurdle for statistical significance: the product effect is then tested against the product-by-judge interaction rather than against pure error, so disagreement among judges counts against declaring a difference. Of course, this is potentially anti-conservative if the purpose of the test is to demonstrate similarity (e.g., cost reduction, parity claim, etc.).
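One way to see this (a sketch under the usual unrestricted two-way mixed-model assumptions, not taken from the articles below): with $I$ products, $J$ judges, $r$ replicates, fixed product effects $\alpha_i$, interaction variance $\sigma^2_{PJ}$ and residual variance $\sigma^2$, the expected mean squares with judges random are

$$
\mathrm{E}[MS_{\mathrm{error}}] = \sigma^2, \qquad
\mathrm{E}[MS_{P\times J}] = \sigma^2 + r\,\sigma^2_{PJ}, \qquad
\mathrm{E}[MS_{P}] = \sigma^2 + r\,\sigma^2_{PJ} + \frac{Jr \sum_i \alpha_i^2}{I-1}.
$$

With judges fixed, the product F-ratio uses $MS_{\mathrm{error}}$ as its denominator; with judges random it uses $MS_{P\times J}$, which is larger in expectation whenever $\sigma^2_{PJ} > 0$ and has fewer degrees of freedom, hence the higher hurdle.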
Note that even if you treat the judge term as fixed, it does not follow that you won’t have a mixed model. Descriptive studies are often replicated, and in that case you can, under either school of thought, treat the replicates (and certain interactions) as random, since the replicates can be considered draws from a population of potential panel sessions.
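A minimal sketch of that kind of model, again with statsmodels and assuming a column named session that labels the replicate sessions (the other column names as in the sketch above):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: score, product, judge, session.
df = pd.read_csv("profiling_data.csv")

# Products and judges as fixed effects; the replicate session as a
# random effect, i.e. a draw from a population of potential sessions.
mixed = smf.mixedlm("score ~ C(product) * C(judge)",
                    data=df, groups="session").fit()
print(mixed.summary())
```

Random interactions involving the session (for example session-by-product) could be added through the vc_formula argument if they are of interest.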
Journal articles on this topic:
- Lundahl, D.S. and McDaniel, M.R. (1988). The Panelist Effect – Fixed or Random? Journal of Sensory Studies, 3(2), 113.
- Various authors (1998). Food Quality and Preference, 9(3), 145-178.